6 research outputs found

    A Machine Learning Approach for Classifying Textual Data in Crowdsourcing

    Crowdsourcing represents an innovative approach that allows companies to engage a diverse network of people over the internet and use their collective creativity, expertise, or workforce to complete tasks that were previously performed by dedicated employees or contractors. However, reviewing and filtering the large volume of solutions, ideas, or feedback submitted by a crowd remains a persistent challenge. Identifying valuable inputs and separating them from low-quality contributions that companies cannot use is time-consuming and cost-intensive. In this study, we build upon the principles of text mining and machine learning to partially automate this process. Our results show that it is possible to explain and predict the quality of crowdsourced contributions based on a set of textual features. We use these textual features to train and evaluate a classification algorithm capable of automatically filtering textual contributions in crowdsourcing.
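    As a rough illustration of the kind of pipeline such a study could build on (not the authors' actual implementation), the Python sketch below trains a TF-IDF and logistic-regression classifier on a handful of hypothetical contributions and uses it to filter a new one; all texts, labels, and model choices are assumptions.

```python
# Minimal sketch: training a text classifier to separate useful crowdsourced
# contributions from low-quality ones. Example texts, labels, and the
# TF-IDF + logistic regression setup are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy contributions with hypothetical quality labels (1 = valuable, 0 = low quality).
contributions = [
    "Add an export-to-CSV option so analysts can reuse the survey data offline.",
    "Great idea!!!",
    "The login form should validate e-mail addresses before submission to reduce support tickets.",
    "idk maybe make it better somehow",
]
labels = [1, 0, 1, 0]

# Textual features (word and bigram TF-IDF) feed a simple linear classifier.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(contributions, labels)

# Filter a new, unseen contribution automatically.
new_contribution = ["Allow reviewers to tag duplicate submissions so they can be merged."]
print(classifier.predict(new_contribution))  # e.g. [1] -> keep for human review
```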

    Patterns of Data-Driven Decision-Making: How Decision-Makers Leverage Crowdsourced Data

    Crowdsourcing represents a powerful approach for organizations to collect data from large networks of people. While research has already made great strides in developing the technological foundations for processing crowdsourced data, little is known about the decision-making patterns that emerge when decision-makers have access to such large amounts of data on people’s behavior, opinions, or ideas. In this study, we analyze the characteristics of decision-making in crowdsourcing based on interviews with decision-makers across 10 multinational corporations. For research, we identify four common patterns of decision-making that range from structured and goal-oriented to highly dynamic and data-driven. In this way, we systematize how decision-makers typically source, process, and use crowdsourced data to inform decisions. We also provide an integrated perspective on how different types of decision problems and modes of acquiring information induce such patterns. For practice, we discuss how information systems should be designed to provide adequate support for these patterns.

    CAN LAYMEN OUTPERFORM EXPERTS? THE EFFECTS OF USER EXPERTISE AND TASK DESIGN IN CROWDSOURCED SOFTWARE TESTING

    In recent years, crowdsourcing has increasingly gained attention as a powerful sourcing mechanism for problem-solving in organizations. Depending on the type of activity addressed by crowdsourcing, the complexity of the tasks and the role of the crowdworkers may differ substantially. It is crucial that tasks are designed and allocated according to the capabilities of the targeted crowds. In this paper, we outline our research in progress, which is concerned with the effects of task complexity and user expertise on performance in crowdsourced software testing. We conduct an experiment and gather empirical data from expert and novice crowds that perform different software testing tasks of varying degrees of complexity. Our expected contribution is twofold. First, for crowdsourcing in general, we aim to provide valuable insights into the process of framing and allocating tasks to crowds in ways that increase the crowdworkers’ performance. Second, we intend to improve the configuration of crowdsourced software testing initiatives. More precisely, the results are expected to show practitioners what types of testing tasks should be assigned to which group of dedicated crowdworkers. In this vein, we deliver valuable decision support for both crowdsourcers and intermediaries to enhance the performance of their crowdsourcing initiatives.

    COMBINING COLLECTIVE AND ARTIFICIAL INTELLIGENCE: TOWARDS A DESIGN THEORY FOR DECISION SUPPORT IN CROWDSOURCING

    Crowdsourcing represents a powerful approach that seeks to harness the collective knowledge or creativity of a large and independent network of people for organizations. While the approach drastically facilitates the sourcing and aggregating of information, it remains a considerable challenge for organizations to process and evaluate the vast amount of crowdsourced contributions – especially when they are submitted in an unstructured, textual format. In this study, we present an ongoing design science research project that is concerned with the construction of a design theory for semi-automated information processing and decision support in crowdsourcing. The proposed concept leverages the power of crowdsourcing in combination with text mining and machine learning algorithms to make the evaluation of textual contributions more efficient and effective for decision-makers. Our work aims to provide the theoretical foundation for designing such systems in crowdsourcing. It is intended to contribute to decision support and business analytics research by outlining the capabilities of text mining and machine learning techniques in contexts that face large amounts of user-generated content. For practitioners, we provide a set of generalized design principles and design features for the implementation of these algorithms on crowdsourcing platforms.
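    A minimal sketch of what semi-automated decision support along these lines might look like, under assumed data: contributions are scored by a hypothetical text classifier and routed to a review or archive queue based on an arbitrary 0.5 threshold. None of the texts, labels, or thresholds reflect the design theory's actual artifacts.

```python
# Minimal sketch of semi-automated decision support: score incoming textual
# contributions and route only the promising ones to human decision-makers.
# Training texts, labels, and the 0.5 threshold are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Integrate the feedback widget with the existing ticketing system.",
    "nice",
    "Offer a dark mode to reduce eye strain during long review sessions.",
    "+1",
]
train_labels = [1, 0, 1, 0]  # 1 = worth a decision-maker's attention

scorer = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
scorer.fit(train_texts, train_labels)

incoming = [
    "Please add keyboard shortcuts for navigating the review queue.",
    "cool idea",
]
scores = scorer.predict_proba(incoming)[:, 1]  # estimated probability of usefulness
for text, score in sorted(zip(incoming, scores), key=lambda pair: -pair[1]):
    queue = "review" if score >= 0.5 else "archive"
    print(f"{score:.2f} -> {queue}: {text}")
```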

    Situational Self-Efficacy and Behavioral Responses to Wearable Use

    Wearables have the potential to optimize health-related behaviors like physical activity and nutritional intake and to improve individual health outcomes. However, researchers are still doubtful about wearables’ capacity to induce behavior change in users. Research building on self-efficacy theory has shown that using wearables can influence users’ perceptions of self-efficacy and behavioral responses both positively and negatively, indicating that there is little stability over time. This study will investigate the factors that cause instability in users’ situational perceptions of self-efficacy and behavioral reactions. We plan to conduct a longitudinal, quasi-experimental field study with wearable users who self-report in weekly intervals on action-related restrictiveness, contextual restrictiveness, personal restrictiveness, situational self-efficacy, and their behavioral responses over eight weeks. A pilot study with a reduced sample yielded promising preliminary results. We will contribute to self-efficacy research by clarifying the factors that cause variations in behavioral responses and finding quantitative support for a situationally varying construct of self-efficacy. We will contribute to practice by deriving implications for the design of wearable devices.

    Understanding the Emergence and Recombination of Distant Knowledge on Crowdsourcing Platforms

    Crowdsourcing represents a powerful approach for organizations to engage in distant search and mobilize knowledge distributed amongst a diverse network of people. While organizations generally succeed in generating large amounts of knowledge, they frequently fail to identify useful ideas that have the potential to solve problems or serve as innovations. We combine text mining and network analysis to examine how such contributions emerge on crowdsourcing platforms and how organizations may identify them. We find that useful ideas typically originate from crowd members with only a few network ties and that these contributions become especially useful when they are enriched with local knowledge provided by experienced members on the platform. We extend existing research by examining the effects of network relationships and knowledge (re)combination in crowdsourcing. We also discuss the potential of network analysis and text mining to support organizations in tracking the origin of contributions and analyzing their content.
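    The sketch below illustrates, under assumed data, how network ties and textual dissimilarity could be combined to surface candidate contributions that carry distant knowledge; the interaction edges, contribution texts, and the averaged-similarity heuristic are hypothetical and do not reproduce the authors' analysis.

```python
# Minimal sketch combining network analysis and text mining to flag contributions
# from weakly connected crowd members whose content is dissimilar to the rest.
# The comment/reply edges, texts, and the simple "distance" heuristic are hypothetical.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Interaction network: an edge means one member commented on another's contribution.
G = nx.Graph()
G.add_edges_from([("alice", "bob"), ("alice", "carol"), ("bob", "carol"), ("dave", "alice")])

contributions = {
    "alice": "Improve the onboarding tutorial with short interactive steps.",
    "bob": "The onboarding tutorial could also link to the community forum.",
    "carol": "Add forum badges to reward members who answer onboarding questions.",
    "dave": "Use sensor data from delivery trucks to predict platform downtime.",
}

members = list(contributions)
tfidf = TfidfVectorizer().fit_transform(contributions[m] for m in members)
similarity = cosine_similarity(tfidf)

for i, member in enumerate(members):
    ties = G.degree(member)
    # Average similarity to everyone else's text; low values suggest "distant" knowledge.
    others = [similarity[i, j] for j in range(len(members)) if j != i]
    avg_sim = sum(others) / len(others)
    print(f"{member:6s} ties={ties}  avg_text_similarity={avg_sim:.2f}")
```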